
    Adaptive Bernstein-von Mises theorems in Gaussian white noise

    We investigate Bernstein-von Mises theorems for adaptive nonparametric Bayesian procedures in the canonical Gaussian white noise model. We consider both a Hilbert space and a multiscale setting, with applications in $L^2$ and $L^\infty$ respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets based on the posterior distribution. We also provide simulations to numerically illustrate our approach and obtain a visual representation of the geometries involved.
    Comment: 48 pages, 5 figures
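The Gaussian white noise model admits a simple sequence-space form in which conjugate Gaussian computations are explicit. The following is an illustrative sketch (not code from the paper): all choices of truth, prior decay `alpha`, truncation level `K` and the linear functional are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, alpha = 1000, 200, 1.0                       # sample size, truncation, prior smoothness (assumed)
k = np.arange(1, K + 1)
theta = k ** -2.0                                  # an illustrative smooth "true" coefficient sequence
Y = theta + rng.standard_normal(K) / np.sqrt(n)    # sequence model: Y_k = theta_k + g_k / sqrt(n)

tau2 = k ** (-1.0 - 2 * alpha)                     # Gaussian prior variances theta_k ~ N(0, tau2_k)
post_var = tau2 / (n * tau2 + 1)                   # conjugate posterior variance, coordinate-wise
post_mean = n * post_var * Y                       # posterior mean shrinks the observations towards 0

# 95% credible interval for a smooth linear functional L(theta) = sum_k a_k theta_k
a = 1.0 / k
L_mean = a @ post_mean
L_sd = np.sqrt(np.sum(a ** 2 * post_var))
ci = (L_mean - 1.96 * L_sd, L_mean + 1.96 * L_sd)
```

Under a Bernstein-von Mises theorem, such plug-in credible intervals for sufficiently smooth functionals attain asymptotically correct frequentist coverage.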

    Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions

    The problem of determining a periodic Lipschitz vector field $b=(b_1, \dots, b_d)$ from an observed trajectory of the solution $(X_t: 0 \le t \le T)$ of the multi-dimensional stochastic differential equation \begin{equation*} dX_t = b(X_t)dt + dW_t, \quad t \geq 0, \end{equation*} where $W_t$ is a standard $d$-dimensional Brownian motion, is considered. Convergence rates of a penalised least squares estimator, which equals the maximum a posteriori (MAP) estimate corresponding to a high-dimensional Gaussian product prior, are derived. These results are deduced from corresponding contraction rates for the associated posterior distributions. The rates obtained are optimal up to log-factors in $L^2$-loss in any dimension, and also for supremum norm loss when $d \le 4$. Further, when $d \le 3$, nonparametric Bernstein-von Mises theorems are proved for the posterior distributions of $b$. From this we deduce functional central limit theorems for the implied estimators of the invariant measure $\mu_b$. The limiting Gaussian process distributions have a covariance structure that is asymptotically optimal from an information-theoretic point of view.
    Comment: 55 pages, to appear in the Annals of Statistics
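A trajectory of the diffusion $dX_t = b(X_t)dt + dW_t$ can be simulated with the standard Euler-Maruyama scheme. The sketch below uses an assumed periodic Lipschitz drift field on $\mathbb{R}^2$ purely for illustration; the drift, horizon and step size are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def b(x):
    # an illustrative 1-periodic Lipschitz drift field on R^2 (assumed example)
    return np.array([np.sin(2 * np.pi * x[1]), np.cos(2 * np.pi * x[0])])

T, dt, d = 10.0, 1e-3, 2
n_steps = int(T / dt)
X = np.zeros((n_steps + 1, d))                   # X[0] = origin
for t in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(d)    # Brownian increment over [t*dt, (t+1)*dt]
    X[t + 1] = X[t] + b(X[t]) * dt + dW          # Euler-Maruyama step for dX_t = b(X_t)dt + dW_t
```

Estimators of $b$ in this setting are then built from such a continuously (or finely) observed path.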

    A Bayesian nonparametric approach to log-concave density estimation

    The estimation of a log-concave density on $\mathbb{R}$ is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-)minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that obviates the need for further metric entropy calculations. We also present two computationally more feasible approximations and a more practical empirical Bayes approach, which are illustrated numerically via simulations.
    Comment: 39 pages, 17 figures. Simulation studies were significantly expanded and one more theorem has been added

    The Le Cam distance between density estimation, Poisson processes and Gaussian white noise

    It is well known that density estimation on the unit interval is asymptotically equivalent to a Gaussian white noise experiment, provided the densities have Hölder smoothness larger than $1/2$ and are uniformly bounded away from zero. We derive matching lower and constructive upper bounds for the Le Cam deficiencies between these experiments, with explicit dependence on both the sample size and the size of the densities in the parameter space. As a consequence, we derive sharp conditions on how small the densities can be for asymptotic equivalence to hold. The related case of Poisson intensity estimation is also treated.
    Comment: Some results from an earlier version of this preprint have been moved to arXiv:1802.0342

    Pointwise uncertainty quantification for sparse variational Gaussian process regression with a Brownian motion prior

    We study pointwise estimation and uncertainty quantification for a sparse variational Gaussian process method with eigenvector inducing variables. For a rescaled Brownian motion prior, we derive theoretical guarantees and limitations for the frequentist size and coverage of pointwise credible sets. For sufficiently many inducing variables, we precisely characterize the asymptotic frequentist coverage, deducing when credible sets from this variational method are conservative and when they are overconfident or misleading. We numerically illustrate the applicability of our results and discuss connections with other common Gaussian process priors.
    Comment: 24 pages, 1 figure, to appear in Advances in Neural Information Processing Systems 37 (NeurIPS 2023)
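A (rescaled) Brownian motion prior has covariance kernel $k(s,t) = c\,\min(s,t)$, and the exact GP posterior that the variational method approximates can be computed in closed form. The sketch below is illustrative only: the truth, noise level, rescaling `c` and evaluation point `x0` are all assumed choices, and the exact posterior is computed rather than the sparse variational one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0, 1, n))
f = np.sin(3 * x)                          # illustrative regression truth
sigma = 0.3
y = f + sigma * rng.standard_normal(n)

c = np.sqrt(n)                             # assumed rescaling of the Brownian motion prior
K = c * np.minimum.outer(x, x)             # Brownian motion covariance k(s,t) = c * min(s,t)

# exact GP posterior at a point x0 (the sparse variational posterior approximates this)
x0 = 0.5
k0 = c * np.minimum(x, x0)
A = K + sigma ** 2 * np.eye(n)
post_mean = k0 @ np.linalg.solve(A, y)
post_var = c * x0 - k0 @ np.linalg.solve(A, k0)
ci = (post_mean - 1.96 * np.sqrt(post_var),
      post_mean + 1.96 * np.sqrt(post_var))   # pointwise 95% credible interval at x0
```

Whether such pointwise credible intervals cover the truth at the nominal frequentist rate is exactly the question the paper addresses for the variational approximation.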

    Debiased Bayesian inference for average treatment effects

    Bayesian approaches have become increasingly popular in causal inference problems due to their conceptual simplicity, excellent performance and in-built uncertainty quantification ('posterior credible sets'). We investigate Bayesian inference for average treatment effects from observational data, which is a challenging problem due to the missing counterfactuals and selection bias. Working in the standard potential outcomes framework, we propose a data-driven modification to an arbitrary (nonparametric) prior based on the propensity score that corrects for the first-order posterior bias, thereby improving performance. We illustrate our method for Gaussian process (GP) priors using (semi-)synthetic data. Our experiments demonstrate significant improvement in both estimation accuracy and uncertainty quantification compared to the unmodified GP, rendering our approach highly competitive with the state-of-the-art.
    Comment: NeurIPS 201

    Bayesian estimation in a multidimensional diffusion model with high frequency data

    We consider nonparametric Bayesian inference in a multidimensional diffusion model with reflecting boundary conditions based on discrete high-frequency observations. We prove a general posterior contraction rate theorem in $L^2$-loss, which is applied to Gaussian priors. The resulting posteriors, as well as their posterior means, are shown to converge to the ground truth at the minimax optimal rate over Hölder smoothness classes in any dimension. Of independent interest and as part of our proofs, we show that certain frequentist penalised least squares estimators are also minimax optimal.
    Comment: 56 pages, 1 figure
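The link between penalised least squares estimators and Gaussian-prior MAP estimates, used in this and the drift-estimation paper above, is easy to see in a finite-dimensional linear model: ridge regression is the MAP estimate under an isotropic Gaussian prior. The sketch below uses assumed dimensions and penalty and is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 100, 20, 0.5                   # sample size, parameters, penalty (assumed)
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# penalised least squares: argmin_b ||y - Xb||^2 + lam * ||b||^2
b_pls = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# MAP under prior b ~ N(0, (sigma^2 / lam) I) with noise variance sigma^2 = 1:
# the posterior is Gaussian with mean (X'X + lam I)^{-1} X'y, the same quantity
b_map = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

The two estimators coincide exactly, which is why contraction rates for the posterior transfer to convergence rates for the penalised least squares / MAP estimator.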
